Results 1 - 20 of 29
1.
PLoS Comput Biol ; 19(10): e1011465, 2023 10.
Article in English | MEDLINE | ID: mdl-37847724

ABSTRACT

This paper presents Integrated Information Theory (IIT) 4.0. IIT aims to account for the properties of experience in physical (operational) terms. It identifies the essential properties of experience (axioms), infers the necessary and sufficient properties that its substrate must satisfy (postulates), and expresses them in mathematical terms. In principle, the postulates can be applied to any system of units in a state to determine whether it is conscious, to what degree, and in what way. IIT offers a parsimonious explanation of empirical evidence, makes testable predictions concerning both the presence and the quality of experience, and permits inferences and extrapolations. IIT 4.0 incorporates several developments of the past ten years, including a more accurate formulation of the axioms as postulates and mathematical expressions, the introduction of a unique measure of intrinsic information that is consistent with the postulates, and an explicit assessment of causal relations. By fully unfolding a system's irreducible cause-effect power, the distinctions and relations specified by a substrate can account for the quality of experience.


Subject(s)
Brain, Information Theory, Neurological Models, Consciousness
2.
Entropy (Basel) ; 25(10)2023 Oct 12.
Article in English | MEDLINE | ID: mdl-37895563

ABSTRACT

In response to a comment by Chris Rourk on our article "Computing the Integrated Information of a Quantum Mechanism", we briefly (1) consider the role of potential hybrid/classical mechanisms from the perspective of integrated information theory (IIT), (2) discuss whether the (Q)IIT formalism needs to be extended to capture the hypothesized hybrid mechanism, and (3) clarify our motivation for developing a QIIT formalism and its scope of applicability.

3.
PLoS Comput Biol ; 19(10): e1011346, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37862364

ABSTRACT

The Free Energy Principle (FEP) and Integrated Information Theory (IIT) are two ambitious theoretical approaches. The first aims to provide a formal framework for describing self-organizing and life-like systems in general; the second attempts a mathematical theory of conscious experience based on the intrinsic properties of a system. Each is concerned with a complementary aspect of such systems, one with life and behavior, the other with meaning and experience, so combining them holds potential scientific value. In this paper, we take a first step towards such a synthesis by expanding on the results of an earlier published evolutionary simulation study, which showed a relationship between IIT measures and fitness across tasks of differing complexity. We relate a basic information-theoretic measure from the FEP, surprisal, to this result, finding that the surprisal of simulated agents' observations is inversely related to the general increase in fitness and integration over evolutionary time. Moreover, surprisal fluctuates together with IIT-based consciousness measures over time within trials. This suggests that the consciousness measures used in IIT indirectly depend on the relation between the agent and the external world, and that it should therefore be possible to relate them to the theoretical concepts used in the FEP. Lastly, we suggest a future approach for investigating this relationship empirically.
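Surprisal, the FEP quantity related to fitness in this abstract, is simply the negative log-probability of an observation. A minimal sketch (the observation probabilities are invented for illustration):

```python
import math

def surprisal(p: float) -> float:
    """Surprisal (self-information) of an observation with probability p, in bits."""
    return -math.log2(p)

# Hypothetical sensor-reading probabilities for a simulated agent:
# frequent observations carry low surprisal, rare ones high surprisal.
p_common, p_rare = 0.5, 0.0625
print(surprisal(p_common))  # 1.0 bit
print(surprisal(p_rare))    # 4.0 bits
```

An agent whose observations become more predictable over evolutionary time would, on this definition, show exactly the falling average surprisal the study reports.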


Subject(s)
Brain, Information Theory, Neurological Models, Consciousness, Computer Simulation
4.
Entropy (Basel) ; 25(3)2023 Mar 03.
Article in English | MEDLINE | ID: mdl-36981337

ABSTRACT

Originally conceived as a theory of consciousness, integrated information theory (IIT) provides a theoretical framework intended to characterize the compositional causal information that a system, in its current state, specifies about itself. However, it remains to be determined whether IIT as a theory of consciousness is compatible with quantum mechanics as a theory of microphysics. Here, we present an extension of IIT's latest formalism to evaluate the mechanism integrated information (φ) of a system subset to discrete, finite-dimensional quantum systems (e.g., quantum logic gates). To that end, we translate a recently developed, unique measure of intrinsic information into a density matrix formulation and extend the notion of conditional independence to accommodate quantum entanglement. The compositional nature of the IIT analysis might shed some light on the internal structure of composite quantum states and operators that cannot be obtained using standard information-theoretical analysis. Finally, our results should inform theoretical arguments about the link between consciousness, causation, and physics from the classical to the quantum.

5.
Entropy (Basel) ; 25(2)2023 Feb 11.
Article in English | MEDLINE | ID: mdl-36832700

ABSTRACT

Integrated information theory (IIT) starts from consciousness itself and identifies a set of properties (axioms) that are true of every conceivable experience. The axioms are translated into a set of postulates about the substrate of consciousness (called a complex), which are then used to formulate a mathematical framework for assessing both the quality and quantity of experience. The explanatory identity proposed by IIT is that an experience is identical to the cause-effect structure unfolded from a maximally irreducible substrate (a Φ-structure). In this work we introduce a definition for the integrated information of a system (φs) that is based on the existence, intrinsicality, information, and integration postulates of IIT. We explore how notions of determinism, degeneracy, and fault lines in the connectivity impact system-integrated information. We then demonstrate how the proposed measure identifies complexes as systems, the φs of which is greater than the φs of any overlapping candidate systems.

6.
Behav Brain Sci ; 45: e42, 2022 03 23.
Article in English | MEDLINE | ID: mdl-35319431

ABSTRACT

To be true of every experience, the axioms of Integrated information theory (IIT) are necessarily basic properties and should not be "over-psychologized." Information, for example, merely asserts that experience is specific, not generic. It does not require "access." The information a system specifies about itself in its current state is revealed by its unfolded cause-effect structure and quantified by its integrated information.


Subject(s)
Consciousness, Information Theory, Humans
7.
Entropy (Basel) ; 23(11)2021 Oct 28.
Article in English | MEDLINE | ID: mdl-34828113

ABSTRACT

Should the internal structure of a system matter when it comes to autonomy? While there is still no consensus on a rigorous, quantifiable definition of autonomy, multiple candidate measures and related quantities have been proposed across various disciplines, including graph-theory, information-theory, and complex system science. Here, I review and compare a range of measures related to autonomy and intelligent behavior. To that end, I analyzed the structural, information-theoretical, causal, and dynamical properties of simple artificial agents evolved to solve a spatial navigation task, with or without a need for associative memory. By contrast to standard artificial neural networks with fixed architectures and node functions, here, independent evolution simulations produced successful agents with diverse neural architectures and functions. This makes it possible to distinguish quantities that characterize task demands and input-output behavior, from those that capture intrinsic differences between substrates, which may help to determine more stringent requisites for autonomous behavior and the means to measure it.

8.
Neurosci Conscious ; 2021(2): niab032, 2021.
Article in English | MEDLINE | ID: mdl-34667639

ABSTRACT

Objective correlates-behavioral, functional, and neural-provide essential tools for the scientific study of consciousness. But reliance on these correlates should not lead to the 'fallacy of misplaced objectivity': the assumption that only objective properties should and can be accounted for objectively through science. Instead, what needs to be explained scientifically is what experience is intrinsically-its subjective properties-not just what we can do with it extrinsically. And it must be explained; otherwise the way experience feels would turn out to be magical rather than physical. We argue that it is possible to account for subjective properties objectively once we move beyond cognitive functions and realize what experience is and how it is structured. Drawing on integrated information theory, we show how an objective science of the subjective can account, in strictly physical terms, for both the essential properties of every experience and the specific properties that make particular experiences feel the way they do.

9.
Nat Neurosci ; 24(10): 1348-1355, 2021 10.
Article in English | MEDLINE | ID: mdl-34556868

ABSTRACT

Causal reductionism is the widespread assumption that there is no room for additional causes once we have accounted for all elementary mechanisms within a system. Due to its intuitive appeal, causal reductionism is prevalent in neuroscience: once all neurons have been caused to fire or not to fire, it seems that causally there is nothing left to be accounted for. Here, we argue that these reductionist intuitions are based on an implicit, unexamined notion of causation that conflates causation with prediction. By means of a simple model organism, we demonstrate that causal reductionism cannot provide a complete and coherent account of 'what caused what'. To that end, we outline an explicit, operational approach to analyzing causal structures.


Subject(s)
Causality, Neurosciences/trends, Philosophy, Animals, Anura/physiology, Forecasting, Neurons/physiology, Species Specificity
10.
Entropy (Basel) ; 23(3)2021 Mar 18.
Article in English | MEDLINE | ID: mdl-33803765

ABSTRACT

The Integrated Information Theory (IIT) of consciousness starts from essential phenomenological properties, which are then translated into postulates that any physical system must satisfy in order to specify the physical substrate of consciousness. We recently introduced an information measure (Barbosa et al., 2020) that captures three postulates of IIT-existence, intrinsicality and information-and is unique. Here we show that the new measure also satisfies the remaining postulates of IIT-integration and exclusion-and create the framework that identifies maximally irreducible mechanisms. These mechanisms can then form maximally irreducible systems, which in turn will specify the physical substrate of conscious experience.

11.
Entropy (Basel) ; 23(1)2020 Dec 22.
Article in English | MEDLINE | ID: mdl-33375068

ABSTRACT

Integrated information theory (IIT) provides a mathematical framework to characterize the cause-effect structure of a physical system and its amount of integrated information (Φ). An accompanying Python software package ("PyPhi") was recently introduced to implement this framework for the causal analysis of discrete dynamical systems of binary elements. Here, we present an update to PyPhi that extends its applicability to systems constituted of discrete, but multi-valued elements. This allows us to analyze and compare general causal properties of random networks made up of binary, ternary, quaternary, and mixed nodes. Moreover, we apply the developed tools for causal analysis to a simple non-binary regulatory network model (p53-Mdm2) and discuss commonly used binarization methods in light of their capacity to preserve the causal structure of the original system with multi-valued elements.
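The binarization caveat at the end of this abstract can be illustrated with a toy example (the thresholding scheme here is invented for illustration, not the paper's method):

```python
# A ternary node with states 0, 1, 2, binarized by thresholding at >= 1.
def binarize(state: int) -> int:
    """Collapse a multi-valued state to a binary one via a threshold."""
    return int(state >= 1)

ternary_states = [0, 1, 2]
print([binarize(s) for s in ternary_states])  # [0, 1, 1]

# States 1 and 2 collapse onto the same binary state, so any causal
# distinction the original mechanism drew between them is lost -
# which is why binarization can fail to preserve the causal structure
# of a system with multi-valued elements.
```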

12.
Sci Rep ; 10(1): 18803, 2020 11 02.
Article in English | MEDLINE | ID: mdl-33139829

ABSTRACT

We introduce an information measure that reflects the intrinsic perspective of a receiver or sender of a single symbol, who has no access to the communication channel and its source or target. The measure satisfies three desired properties-causality, specificity, intrinsicality-and is shown to be unique. Causality means that symbols must be transmitted with probability greater than chance. Specificity means that information must be transmitted by an individual symbol. Intrinsicality means that a symbol must be taken as such and cannot be decomposed into signal and noise. It follows that the intrinsic information carried by a specific symbol increases if the repertoire of symbols increases without noise (expansion) and decreases if it does so without signal (dilution). An optimal balance between expansion and dilution is relevant for systems whose elements must assess their inputs and outputs from the intrinsic perspective, such as neurons in a network.
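The expansion/dilution trade-off can be sketched numerically. Assuming the measure takes the form p · log2(p/q), with q = 1/n the chance level over a repertoire of n symbols (the exact formulation is given in the paper), we get:

```python
import math

def intrinsic_info(p: float, n: int) -> float:
    """Sketch of the intrinsic information carried by one received symbol.
    p = probability the symbol was transmitted (vs. chance),
    n = repertoire size, so chance level q = 1/n.
    Assumed form p * log2(p / q); see the paper for the exact measure."""
    q = 1.0 / n
    return p * math.log2(p / q)

# Expansion: enlarging a noiseless repertoire raises intrinsic information.
print(intrinsic_info(1.0, 4), intrinsic_info(1.0, 8))  # 2.0 3.0

# Dilution: enlarging the repertoire while spreading the signal thin lowers it.
print(intrinsic_info(0.5, 4), intrinsic_info(0.25, 8))  # 1.0 0.25
```

Under this toy form, a neuron-like receiver would favor a repertoire large enough to expand its information but not so large that each symbol's signal is diluted.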

13.
PLoS One ; 15(2): e0228879, 2020.
Article in English | MEDLINE | ID: mdl-32032380

ABSTRACT

Evolving in groups can either enhance or reduce an individual's task performance. Still, we know little about the factors underlying group performance, which may be reduced to three major dimensions: (a) the individual's ability to perform a task, (b) the dependency on environmental conditions, and (c) the perception of, and the reaction to, other group members. In our research, we investigated how these dimensions interrelate in simulated evolution experiments using adaptive agents equipped with Markov brains ("animats"). We evolved the animats to perform a spatial-navigation task under various evolutionary setups. The last generation of each evolution simulation was tested across modified conditions to evaluate and compare the animats' reliability when faced with change. Moreover, the complexity of the evolved Markov brains was assessed based on measures of information integration. We found that, under the right conditions, specialized animats could be as reliable as animats already evolved for the modified tasks, and that reliability across varying group sizes correlated with evolved fitness in most tested evolutionary setups. Our results moreover suggest that balancing the number of individuals in a group may lead to higher reliability but also lower individual performance. Besides, high brain complexity was associated with balanced group sizes and, thus, high reliability under limited sensory capacity. However, additional sensors allowed for even higher reliability across modified environments without a need for complex, integrated Markov brains. Despite complex dependencies between the individual, the group, and the environment, our computational approach provides a way to study reliability in group behavior under controlled conditions. In all, our study revealed that balancing the group size and individual cognitive abilities prevents over-specialization and can help to evolve better reliability under unknown environmental situations.


Subject(s)
Biological Evolution, Cognition, Computer Simulation, Animals, Brain/physiology, Environment, Genetic Fitness, Humans, Intelligence, Markov Chains, Memory, Biological Models, Population Density, Social Environment, Task Performance and Analysis
14.
Entropy (Basel) ; 21(5)2019 May 02.
Article in English | MEDLINE | ID: mdl-33267173

ABSTRACT

Actual causation is concerned with the question: "What caused what?" Consider a transition between two states within a system of interacting elements, such as an artificial neural network, or a biological brain circuit. Which combination of synapses caused the neuron to fire? Which image features caused the classifier to misinterpret the picture? Even detailed knowledge of the system's causal network, its elements, their states, connectivity, and dynamics does not automatically provide a straightforward answer to the "what caused what?" question. Counterfactual accounts of actual causation, based on graphical models paired with system interventions, have demonstrated initial success in addressing specific problem cases, in line with intuitive causal judgments. Here, we start from a set of basic requirements for causation (realization, composition, information, integration, and exclusion) and develop a rigorous, quantitative account of actual causation, that is generally applicable to discrete dynamical systems. We present a formal framework to evaluate these causal requirements based on system interventions and partitions, which considers all counterfactuals of a state transition. This framework is used to provide a complete causal account of the transition by identifying and quantifying the strength of all actual causes and effects linking the two consecutive system states. Finally, we examine several exemplary cases and paradoxes of causation and show that they can be illuminated by the proposed framework for quantifying actual causation.

15.
PLoS Comput Biol ; 14(7): e1006343, 2018 07.
Article in English | MEDLINE | ID: mdl-30048445

ABSTRACT

Integrated information theory provides a mathematical framework to fully characterize the cause-effect structure of a physical system. Here, we introduce PyPhi, a Python software package that implements this framework for causal analysis and unfolds the full cause-effect structure of discrete dynamical systems of binary elements. The software allows users to easily study these structures, serves as an up-to-date reference implementation of the formalisms of integrated information theory, and has been applied in research on complexity, emergence, and certain biological questions. We first provide an overview of the main algorithm and demonstrate PyPhi's functionality in the course of analyzing an example system, and then describe details of the algorithm's design and implementation. PyPhi can be installed with Python's package manager via the command 'pip install pyphi' on Linux and macOS systems equipped with Python 3.4 or higher. PyPhi is open-source and licensed under the GPLv3; the source code is hosted on GitHub at https://github.com/wmayner/pyphi. Comprehensive and continually-updated documentation is available at https://pyphi.readthedocs.io. The pyphi-users mailing list can be joined at https://groups.google.com/forum/#!forum/pyphi-users. A web-based graphical interface to the software is available at http://integratedinformationtheory.org/calculate.html.
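The discrete dynamical systems PyPhi analyzes are specified by a transition probability matrix (TPM). A minimal sketch of constructing one, in state-by-node form, for a two-node deterministic logic-gate network (the PyPhi calls in the trailing comment are indicative only):

```python
from itertools import product

# Two binary nodes: A updates as OR of both nodes, B as AND of both.
def step(state):
    a, b = state
    return (a | b, a & b)

# State-by-node TPM: one row per current state, in little-endian order
# over (A, B), one column per node's probability of being ON next step.
states = [s[::-1] for s in product((0, 1), repeat=2)]  # 00, 10, 01, 11
tpm = [list(map(float, step(s))) for s in states]
print(tpm)  # [[0.0, 0.0], [1.0, 0.0], [1.0, 0.0], [1.0, 1.0]]

# With PyPhi installed (pip install pyphi), a TPM like this could then
# be analyzed, e.g. network = pyphi.Network(tpm) followed by computing
# the integrated information of a subsystem in a given state.
```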


Subject(s)
Computer Simulation, Information Theory, Software Design, Algorithms, Markov Chains, User-Computer Interface
16.
PLoS Comput Biol ; 14(4): e1006114, 2018 04.
Article in English | MEDLINE | ID: mdl-29684020

ABSTRACT

Reductionism assumes that causation in the physical world occurs at the micro level, excluding the emergence of macro-level causation. We challenge this reductionist assumption by employing a principled, well-defined measure of intrinsic cause-effect power-integrated information (Φ), and showing that, according to this measure, it is possible for a macro level to "beat" the micro level. Simple systems were evaluated for Φ across different spatial and temporal scales by systematically considering all possible black boxes. These are macro elements that consist of one or more micro elements over one or more micro updates. Cause-effect power was evaluated based on the inputs and outputs of the black boxes, ignoring the internal micro elements that support their input-output function. We show how black-box elements can have more common inputs and outputs than the corresponding micro elements, revealing the emergence of high-order mechanisms and joint constraints that are not apparent at the micro level. As a consequence, a macro, black-box system can have higher Φ than its micro constituents by having more mechanisms (higher composition) that are more interconnected (higher integration). We also show that, for a given micro system, one can identify local maxima of Φ across several spatiotemporal scales. The framework is demonstrated on a simple biological system, the Boolean network model of the fission-yeast cell-cycle, for which we identify stable local maxima during the course of its simulated biological function. These local maxima correspond to macro levels of organization at which emergent cause-effect properties of physical systems come into focus, and provide a natural vantage point for scientific inquiries.


Subject(s)
Systems Biology/statistics & numerical data, Cell Cycle, Computational Biology, Computer Simulation, Biological Models, Schizosaccharomyces/cytology, Systems Theory
17.
Philos Trans A Math Phys Eng Sci ; 375(2109)2017 Dec 28.
Article in English | MEDLINE | ID: mdl-29133455

ABSTRACT

Standard techniques for studying biological systems largely focus on their dynamical or, more recently, their informational properties, usually taking either a reductionist or holistic perspective. Yet, studying only individual system elements or the dynamics of the system as a whole disregards the organizational structure of the system-whether there are subsets of elements with joint causes or effects, and whether the system is strongly integrated or composed of several loosely interacting components. Integrated information theory offers a theoretical framework to (1) investigate the compositional cause-effect structure of a system and to (2) identify causal borders of highly integrated elements comprising local maxima of intrinsic cause-effect power. Here we apply this comprehensive causal analysis to a Boolean network model of the fission yeast (Schizosaccharomyces pombe) cell cycle. We demonstrate that this biological model features a non-trivial causal architecture, whose discovery may provide insights about the real cell cycle that could not be gained from holistic or reductionist approaches. We also show how some specific properties of this underlying causal architecture relate to the biological notion of autonomy. Ultimately, we suggest that analysing the causal organization of a system, including key features like intrinsic control and stable causal borders, should prove relevant for distinguishing life from non-life, and thus could also illuminate the origin-of-life problem. This article is part of the themed issue 'Reconceptualizing the origins of life'.


Subject(s)
Biological Models, Schizosaccharomyces/cytology, Cell Cycle
18.
Cereb Cortex ; 26(8): 3611-26, 2016 08.
Article in English | MEDLINE | ID: mdl-27269960

ABSTRACT

How do you make a decision if you do not know the rules of the game? Models of sensory decision-making suggest that choices are slow if evidence is weak, but they may only apply if the subject knows the task rules. Here, we asked how the learning of a new rule influences neuronal activity in the visual (area V1) and frontal cortex (area FEF) of monkeys. We devised a new icon-selection task. On each day, the monkeys saw 2 new icons (small pictures) and learned which one was relevant. We rewarded eye movements to a saccade target connected to the relevant icon with a curve. Neurons in visual and frontal cortex coded the monkey's choice, because the representation of the selected curve was enhanced. Learning delayed the neuronal selection signals and we uncovered the cause of this delay in V1, where learning to select the relevant icon caused an early suppression of surrounding image elements. These results demonstrate that the learning of a new rule causes a transition from fast and random decisions to a more considerate strategy that takes additional time and they reveal the contribution of visual and frontal cortex to the learning process.


Subject(s)
Frontal Lobe/physiology, Learning/physiology, Visual Cortex/physiology, Visual Perception/physiology, Action Potentials, Animals, Choice Behavior/physiology, Eye Movement Measurements, Haplorhini, Microelectrodes, Neurons/physiology, Reward, Saccades/physiology
19.
J Neurophysiol ; 115(4): 2199-213, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26843602

ABSTRACT

Recent evidence suggests that synaptic refinement, the reorganization of synapses and connections without significant change in their number or strength, is important for the development of the visual system of juvenile rodents. Other evidence in rodents and humans shows that there is a marked drop in sleep slow-wave activity (SWA) during adolescence. Slow waves reflect synchronous transitions of neuronal populations between active and inactive states, and the amount of SWA is influenced by the connection strength and organization of cortical neurons. In this study, we investigated whether synaptic refinement could account for the observed developmental drop in SWA. To this end, we employed a large-scale neural model of primary visual cortex and sections of the thalamus, capable of producing realistic slow waves. In this model, we reorganized intralaminar connections according to experimental data on synaptic refinement: during prerefinement, local connections between neurons were homogenous, whereas in postrefinement, neurons connected preferentially to neurons with similar receptive fields and preferred orientations. Synaptic refinement led to a drop in SWA and to changes in slow-wave morphology, consistent with experimental data. To test whether learning can induce synaptic refinement, intralaminar connections were equipped with spike timing-dependent plasticity. Oriented stimuli were presented during a learning period, followed by homeostatic synaptic renormalization. This led to activity-dependent refinement accompanied again by a decline in SWA. Together, these modeling results show that synaptic refinement can account for developmental changes in SWA. Thus sleep SWA may be used to track noninvasively the reorganization of cortical connections during development.


Subject(s)
Brain Waves, Neurological Models, Sleep, Synaptic Potentials, Animals, Humans, Neurogenesis, Neurons/physiology, Thalamus/cytology, Thalamus/growth & development, Thalamus/physiology, Visual Cortex/cytology, Visual Cortex/growth & development, Visual Cortex/physiology
20.
Neurosci Conscious ; 2016(1): niw012, 2016.
Article in English | MEDLINE | ID: mdl-30788150

ABSTRACT

Causal interactions within complex systems such as the brain can be analyzed at multiple spatiotemporal levels. It is widely assumed that the micro level is causally complete, thus excluding causation at the macro level. However, by measuring effective information-how much a system's mechanisms constrain its past and future states-we recently showed that causal power can be stronger at macro rather than micro levels. In this work, we go beyond effective information and consider additional requirements of a proper measure of causal power from the intrinsic perspective of a system: composition (the cause-effect power of the parts), state-dependency (the cause-effect power of the system in a specific state), integration (the causal irreducibility of the whole to its parts), and exclusion (the causal borders of the system). A measure satisfying these requirements, called Φmax, was developed in the context of integrated information theory. Here, we evaluate Φmax systematically at micro and macro levels in space and time using simplified neuronal-like systems. We show that for systems characterized by indeterminism and/or degeneracy, Φ can indeed peak at a macro level. This happens if coarse-graining micro elements produces macro mechanisms with high irreducible causal selectivity. These results are relevant to a theoretical account of consciousness, because for integrated information theory the spatiotemporal maximum of integrated information fixes the spatiotemporal scale of consciousness. More generally, these results show that the notions of macro causal emergence and micro causal exclusion hold when causal power is assessed in full and from the intrinsic perspective of a system.
